The carrier-based aircraft on a carrier deck are densely parked and frequently occluded, which makes the aircraft targets difficult to detect, and the detection effect is easily affected by lighting conditions and target size. Therefore, an improved Faster R-CNN (Faster Region with Convolutional Neural Network) method for carrier-based aircraft target detection was proposed. In this method, a loss function with a repulsion loss strategy was designed and combined with multi-scale training, and images collected under laboratory conditions were used to train and test the deep convolutional neural network. Test experiments show that, compared with the original Faster R-CNN detection model, the improved model detects occluded aircraft targets better: recall increased by 7 percentage points and precision increased by 6 percentage points. The experimental results show that the proposed method can automatically and comprehensively extract the features of carrier-based aircraft targets, alleviates the detection problem of occluded carrier-based aircraft targets, achieves detection accuracy and speed that meet practical needs, and remains adaptable and robust under different lighting conditions and target sizes.
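The abstract does not spell out the repulsion term. The sketch below assumes a RepGT-style formulation from the repulsion loss literature, in which each predicted box is additionally penalized for overlapping ground-truth boxes other than its assigned target, measured by intersection-over-ground-truth (IoG) and passed through a smooth-ln penalty; it is a minimal illustration, not the paper's exact loss.

```python
import numpy as np

def iog(pred, gt):
    """Intersection over Ground-truth area of two [x1, y1, x2, y2] boxes."""
    ix1, iy1 = max(pred[0], gt[0]), max(pred[1], gt[1])
    ix2, iy2 = min(pred[2], gt[2]), min(pred[3], gt[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    gt_area = (gt[2] - gt[0]) * (gt[3] - gt[1])
    return inter / gt_area if gt_area > 0 else 0.0

def smooth_ln(x, sigma=0.5):
    """Smooth-ln penalty: logarithmic below sigma, linear above it."""
    x = min(max(x, 0.0), 1.0 - 1e-6)
    if x <= sigma:
        return -np.log(1 - x)
    return (x - sigma) / (1 - sigma) - np.log(1 - sigma)

def rep_gt_loss(pred_boxes, target_idx, gt_boxes):
    """Penalize each prediction for overlapping its most-overlapping
    non-target ground-truth box (the 'repulsion' term)."""
    loss = 0.0
    for pred, tgt in zip(pred_boxes, target_idx):
        others = [g for i, g in enumerate(gt_boxes) if i != tgt]
        if not others:
            continue
        loss += smooth_ln(max(iog(pred, g) for g in others))
    return loss / max(len(pred_boxes), 1)
```

In training, a term of this form would be added to the usual classification and box-regression losses so that proposals are pushed away from neighbouring, occluding ground-truth boxes.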
Obtaining a large number of samples by conventional means is time-consuming, labor-intensive and expensive, a problem faced by Artificial Intelligence (AI) application research in many fields; accordingly, a variety of sample augmentation methods have been proposed across AI research fields. Firstly, the research background and significance of data augmentation were introduced. Then, the data augmentation methods in several common fields (including natural image recognition, character recognition and discourse parsing) were summarized, and on this basis a detailed overview of sample acquisition and augmentation methods in the field of medical image assisted diagnosis was provided, covering X-ray, Computed Tomography (CT) and Magnetic Resonance Imaging (MRI) images. Finally, the key issues of data augmentation methods in AI application fields were summarized and future development trends were discussed. It can be concluded that obtaining a sufficient number of broadly representative training samples is key to the research and development of all AI fields. Both common and specialized fields have adopted sample augmentation, and different fields, or even different research directions within the same field, use different sample acquisition or augmentation methods. In addition, sample augmentation is not simply a matter of increasing the number of samples, but of reproducing, as far as possible, the real samples that a small sample set cannot fully cover, so as to improve sample diversity and enhance AI system performance.
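As a concrete illustration of the kind of label-preserving transforms surveyed for image data, the following minimal sketch generates a few variants of one sample with numpy; the specific transforms (flips, rotation, mild noise) are generic examples and not the methods of any particular work covered by the survey.

```python
import numpy as np

def augment(image, rng=np.random.default_rng(0)):
    """Generate a few label-preserving variants of a 2D grayscale image."""
    return [
        np.fliplr(image),                           # horizontal flip
        np.flipud(image),                           # vertical flip
        np.rot90(image, k=int(rng.integers(1, 4))),  # random 90-degree rotation
        image + rng.normal(0, 0.01, image.shape),    # mild Gaussian noise
    ]

sample = np.zeros((64, 64))
augmented = augment(sample)   # four new samples derived from one original
```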
Since the high computational requirements of face pose estimation systems prevent them from running on mobile phones in real time, a real-time face pose estimation system was implemented for Android mobile terminals. First, one frontal face image and one face image with a certain offset angle were captured by the camera, and a simple 3D face model was built from them by the Structure from Motion (SfM) algorithm. Secondly, the system extracted feature points from the real-time face image and matched them to the 3D face model, and the 3D face pose parameters were obtained by the POSIT (Pose from Orthography and Scaling with ITeration) algorithm. Finally, the 3D face model was displayed on the Android mobile terminal in real time using OpenGL (Open Graphics Library). The experimental results showed that the speed of detecting and displaying the face pose reached 20 frame/s on real-time video, close to that of the 3D face pose estimation algorithm based on affine correspondence running on desktop computers, and the speed of processing large image sequences reached 50 frame/s. The results indicate that the system satisfies the performance requirements of Android mobile terminals and the real-time requirement of face pose detection.
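POSIT itself is not exposed in modern OpenCV, so the sketch below uses cv2.solvePnP as a stand-in to recover the head rotation and translation from the same kind of 2D-3D feature correspondences the abstract describes; the model points, image points and camera parameters are illustrative placeholders, not values from the paper.

```python
import cv2
import numpy as np

# Hypothetical 3D model points (from the SfM face model) and their detected
# 2D projections in the current frame; all coordinates are placeholders.
model_points = np.array([
    [0.0, 0.0, 0.0],        # nose tip
    [0.0, 75.0, -30.0],     # chin
    [-30.0, -30.0, -30.0],  # left eye corner
    [30.0, -30.0, -30.0],   # right eye corner
    [-25.0, 30.0, -20.0],   # left mouth corner
    [25.0, 30.0, -20.0],    # right mouth corner
], dtype=np.float64)
image_points = np.array(
    [[320, 240], [322, 320], [290, 210], [350, 210], [300, 280], [340, 280]],
    dtype=np.float64)

# Simple pinhole camera: focal length ~ image width, principal point at center.
camera = np.array([[640, 0, 320], [0, 640, 240], [0, 0, 1]], dtype=np.float64)

ok, rvec, tvec = cv2.solvePnP(model_points, image_points, camera, None,
                              flags=cv2.SOLVEPNP_EPNP)
rotation, _ = cv2.Rodrigues(rvec)   # 3x3 head rotation matrix
print(ok, rotation, tvec)
```

The recovered rotation and translation are what would then be handed to the OpenGL renderer to pose the 3D face model each frame.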
Concerning the low server utilization and complicated energy management caused by the random block placement strategy in distributed file systems, a vector of block access features was built to characterize random block access behavior. The K-means algorithm was adopted to cluster the blocks according to these vectors, and the datanodes were then divided into multiple regions to store the blocks of different clusters. The data blocks were dynamically reconfigured according to the clustering results when the system load was low, so that unneeded datanodes could sleep to reduce energy consumption. The flexible setting of the inter-cluster distance parameter makes the strategy suitable for scenarios with different requirements on energy consumption and utilization. Mathematical analysis and experimental results show that, compared with hot-cold zoning strategies, the proposed method achieves higher energy-saving efficiency, reducing energy consumption by 35% to 38%.
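A minimal sketch of the clustering step, assuming each block's access-feature vector is simply its hourly access counts; this feature definition, the number of clusters and the zone assignment are illustrative choices, not the paper's exact formulation.

```python
import numpy as np
from sklearn.cluster import KMeans

# Assumed feature: per-block access counts over 24 hourly slots.
rng = np.random.default_rng(0)
hot = rng.poisson(50, size=(200, 24))    # frequently accessed blocks
cold = rng.poisson(2, size=(800, 24))    # rarely accessed blocks
features = np.vstack([hot, cold]).astype(float)

# Cluster blocks by access behaviour, then map each cluster to a datanode
# region; regions holding only cold clusters become candidates for sleeping.
kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(features)
for cluster_id in range(3):
    mask = kmeans.labels_ == cluster_id
    print(f"region {cluster_id}: {mask.sum()} blocks, "
          f"mean accesses/hour = {features[mask].mean():.1f}")
```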
In order to overcome the oscillation caused by hard-threshold wavelet filtering and the waveform distortion caused by soft-threshold wavelet filtering, a wavelet threshold de-noising method based on a genetically optimized function curve, named GOCWT, was proposed. In GOCWT, a quadratic function was used to approximate the optimal threshold function curve. The Root Mean Square Error (RMSE) and the smoothness of the reconstructed signal were used to design the fitness function, and the Genetic Algorithm (GA) was then used to optimize the parameters of the new thresholding function. Analysis of 48 segments of ECG signals shows that the new method increases the smoothness value by 36% compared with the hard-threshold method and decreases the RMSE by 32% compared with the soft-threshold method. The results show that the proposed algorithm outperforms both hard- and soft-threshold wavelet filtering: it not only avoids the undesirable oscillation of the filtered signal, but also preserves fine features of the signal, including peak values.
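A minimal sketch of the idea, under several stated assumptions: the quadratic transition between hard and soft thresholding is an assumed form, the fitness simply sums a fidelity term and a smoothness term with equal weight, and scipy's differential evolution stands in for the paper's GA as the optimizer.

```python
import numpy as np
import pywt
from scipy.optimize import differential_evolution

def quad_threshold(c, thr, a, b):
    """Assumed quadratic thresholding curve: coefficients below thr are zeroed,
    larger ones follow a quadratic shrinkage capped at their original value."""
    out = np.zeros_like(c)
    big = np.abs(c) > thr
    excess = np.abs(c[big]) - thr
    out[big] = np.sign(c[big]) * np.minimum(a * excess ** 2 + b * excess,
                                            np.abs(c[big]))
    return out

def denoise(signal, params, wavelet="db4", level=4):
    thr, a, b = params
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    coeffs = [coeffs[0]] + [quad_threshold(c, thr, a, b) for c in coeffs[1:]]
    return pywt.waverec(coeffs, wavelet)[: len(signal)]

def fitness(params, noisy):
    rec = denoise(noisy, params)
    rmse = np.sqrt(np.mean((rec - noisy) ** 2))   # fidelity to observed signal
    smooth = np.mean(np.diff(rec) ** 2)           # smaller = smoother
    return rmse + smooth                          # assumed equal weighting

t = np.linspace(0, 1, 1024)
noisy = np.sin(2 * np.pi * 5 * t) + 0.2 * np.random.default_rng(0).normal(size=t.size)
# Differential evolution stands in for the paper's GA here.
best = differential_evolution(fitness, [(0.05, 1.0), (0.0, 5.0), (0.0, 2.0)],
                              args=(noisy,), seed=0, maxiter=15)
print(best.x)   # optimized threshold-curve parameters (thr, a, b)
```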
The emergence of RAMCloud has improved the user experience of Online Data-Intensive (OLDI) applications; however, its energy consumption is higher than that of traditional cloud data centers. To address this problem, an energy-efficient strategy for disks under this architecture was put forward. Firstly, the fitness function and roulette wheel selection of the genetic algorithm were introduced to choose energy-saving disks for persistent data backup; secondly, a reasonable buffer size was chosen to extend the average continuous idle time of disks, so that some of them could be put into standby during their idle periods. Simulation results show that the proposed strategy saves about 12.69% of energy in a given RAMCloud system with 50 servers. The buffer size affects both the energy-saving effect and data availability, and the two must be weighed against each other.
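A minimal sketch of the roulette wheel selection step, assuming each disk has already been assigned a fitness score that is higher when choosing it as a backup target is expected to save more energy; the fitness values here are hypothetical placeholders.

```python
import numpy as np

def roulette_select(fitness, k, rng=np.random.default_rng(0)):
    """Roulette wheel selection: each disk's probability of being picked
    is proportional to its fitness (distinct disks are drawn here)."""
    p = np.asarray(fitness, dtype=float)
    p = p / p.sum()
    return rng.choice(len(p), size=k, replace=False, p=p)

# Hypothetical per-disk fitness: higher for disks whose use as backup
# targets costs the least extra energy (e.g. already spinning, lightly loaded).
disk_fitness = [0.9, 0.2, 0.7, 0.4, 0.05, 0.6]
backup_disks = roulette_select(disk_fitness, k=3)
print("disks chosen for persistent backup:", backup_disks)
```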
For the problems of low server utilization and serious waste of energy in cloud computing environments, an energy-efficient strategy for dynamic management of cloud storage replicas based on user visiting characteristics was put forward. By transforming the study of user visiting characteristics into the calculation of a visiting temperature for each Block, a DataNode actively applies for sleep according to the global visiting temperature, so as to achieve energy saving. The dormancy application and dormancy verification algorithms were given in detail, and the strategy for handling visits that arrive while a DataNode is dormant was described explicitly. The experimental results show that after adopting this strategy, 29%-42% of DataNodes can sleep, energy consumption is reduced by 31%, and server response time remains acceptable. The performance analysis shows that the proposed strategy can effectively reduce energy consumption while guaranteeing data availability.
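The abstract does not define the visiting temperature, so the sketch below assumes a simple exponentially decayed access count per block, with a DataNode applying for sleep when the aggregate temperature of its blocks falls below a threshold; the decay factor and threshold are illustrative.

```python
DECAY = 0.9          # assumed decay factor applied each update interval
SLEEP_THRESHOLD = 5  # assumed aggregate-temperature threshold for dormancy

class BlockTemperature:
    """Exponentially decayed access count used as a block's visiting temperature."""
    def __init__(self):
        self.temperature = 0.0

    def record_access(self):
        self.temperature += 1.0

    def decay(self):
        self.temperature *= DECAY

def should_apply_for_sleep(blocks):
    """A DataNode applies for dormancy when its blocks are collectively cold."""
    return sum(b.temperature for b in blocks) < SLEEP_THRESHOLD

blocks = [BlockTemperature() for _ in range(4)]
blocks[0].record_access()
for b in blocks:
    b.decay()
print(should_apply_for_sleep(blocks))   # True: this node is cold enough to sleep
```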
The IPv6 Neighbor Cache (NC) is highly vulnerable to attack; therefore, an improved method named Reversed Detection Plus (RD+) was proposed. A timestamp and a sequence number were first introduced to enforce a strict response time limit and response matching, respectively; an RD+ queue was defined to store the timestamp and sequence number, and a Random Early Detection Based on Timestamp (RED-T) algorithm was designed to prevent Denial of Service (DoS) attacks. The experimental results show that RD+ can effectively protect the IPv6 NC against spoofing and DoS attacks, and that compared with the Heuristic and Explicit (HE) and Secure Neighbor Discovery (SEND) methods, RD+ consumes fewer resources.
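The abstract only names the ingredients (timestamp, sequence number, a RED-style queue), so the sketch below is a hypothetical reconstruction of how a response could be matched against a pending-entry queue with a strict time limit, plus a RED-like probabilistic refusal of new entries as the queue fills; the thresholds and timeout are assumptions, and this is not the paper's exact algorithm.

```python
import random
import time
from collections import deque

RESPONSE_TIMEOUT = 0.5    # assumed strict response time limit (seconds)
MIN_TH, MAX_TH = 32, 128  # assumed RED-T queue thresholds

pending = deque()          # RD+ queue of (sequence, timestamp) for sent probes

def send_probe(sequence):
    """Add a probe to the RD+ queue, refusing new entries probabilistically
    (RED-style) as the queue fills to blunt a DoS flood."""
    if len(pending) >= MAX_TH:
        return False
    if len(pending) > MIN_TH:
        drop_p = (len(pending) - MIN_TH) / (MAX_TH - MIN_TH)
        if random.random() < drop_p:
            return False
    pending.append((sequence, time.time()))
    return True

def accept_response(sequence):
    """Accept a response only if it matches a pending sequence number
    and arrives within the strict time limit."""
    now = time.time()
    for entry in list(pending):
        seq, ts = entry
        if seq == sequence and now - ts <= RESPONSE_TIMEOUT:
            pending.remove(entry)
            return True
    return False
```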
To deal with the problem of inter-bank money laundering, and in combination with limited-information management methods, a new anti-money laundering model was presented based on the central bank High-Value Payment System (HVPS) architecture. The proposed model utilized distributed monitoring nodes to trace money laundering crimes and used an event description method to record the crime procedures. A new grey relational information fusion algorithm was designed to integrate the information from multiple monitors, and an improved power spectral algorithm was proposed for fast data analysis and money laundering recognition. The simulation results show that the model outperforms comparable models in processing performance and anti-money laundering recognition accuracy: it improves money-laundering client coverage by 12%, the discovery rate by 12% and the recall rate by 5%.
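A minimal sketch of grey relational analysis in its standard form, applied here to fuse suspicion scores from several hypothetical monitor nodes; the feature values, the all-ones reference profile and the resolution coefficient are illustrative, not the paper's.

```python
import numpy as np

def grey_relational_grade(reference, candidates, rho=0.5):
    """Grey relational grade of each candidate sequence against a reference.
    reference: (n_features,), candidates: (n_candidates, n_features)."""
    diff = np.abs(candidates - reference)
    d_min, d_max = diff.min(), diff.max()
    coeff = (d_min + rho * d_max) / (diff + rho * d_max)  # relational coefficients
    return coeff.mean(axis=1)                             # average over features

# Hypothetical per-client suspicion features reported by three monitor nodes
# (already normalized to [0, 1]); the reference is an "ideal launderer" profile.
monitors = np.array([
    [0.9, 0.8, 0.7],   # client A as seen by monitors 1-3
    [0.2, 0.1, 0.3],   # client B
    [0.6, 0.7, 0.9],   # client C
])
ideal = np.ones(3)
print(grey_relational_grade(ideal, monitors))  # higher grade = more suspicious
```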
A runtime error is generated during a program's dynamic execution, and when such an error occurs, traditional debugging tools must be used to analyze its cause. Because the real execution environment of some exceptions and of multi-threaded programs cannot be reproduced, traditional debugging and analysis are often ineffective. If variable information can be captured during program execution, the runtime error site can be caught and used as a basis for analyzing the cause of the error. In this paper, a technique for capturing the runtime error site based on variable tracking was proposed; it can capture specific variable information according to user needs and effectively improves the flexibility of access to variable information. Based on this technique, a tool named Runtime Fault Site Analysis (RFST) was implemented, which can be used to analyze the cause of an error and provides the error site together with aided analysis approaches.
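As a small illustration of capturing variable state at the error site (sketched in Python rather than whatever language RFST targets), the snippet below installs an exception hook that walks the traceback of an uncaught exception and records the local variables of each frame on the failing call chain.

```python
import sys
import traceback

def capture_error_site(exc_type, exc_value, tb):
    """Record the local variables of every frame on the failing call chain,
    giving a snapshot of the runtime error site."""
    site = []
    for frame, lineno in traceback.walk_tb(tb):
        site.append({
            "function": frame.f_code.co_name,
            "line": lineno,
            "locals": dict(frame.f_locals),
        })
    print("runtime error site:", exc_type.__name__, exc_value)
    for frame_info in site:
        print(frame_info)

sys.excepthook = capture_error_site   # install the capture hook

def divide(a, b):
    ratio = a / b          # raises ZeroDivisionError when b == 0
    return ratio

divide(10, 0)              # the hook prints a, b and the failing frame
```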
Due to the space-time continuity of physical attributes such as temperature and illumination, high spatio-temporal correlation exists among the sensed data in a high-density Wireless Sensor Network (WSN). The data redundancy produced by this correlation places a heavy burden on network communication and shortens the network's lifetime. A Clustered Data Collection Framework (CDCF) based on a prediction model was proposed to exploit the data correlation and reduce network traffic. The framework includes a time series prediction model based on curve fitting with the least squares method and an efficient error control strategy. In the process of data collection, the clustered structure exploits the spatial correlation, while the time series prediction model exploits the temporal correlation in the sensed data. Simulations show that, in a relatively stable environment, CDCF needs only 10%-20% of the raw data volume to complete the data collection of the network, and the error of the data restored at the sink is less than the threshold defined by the user.
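A minimal sketch of the prediction-plus-error-control idea, assuming a polynomial least-squares fit over a sliding window (numpy.polyfit) as the time series model; the window size, polynomial degree and error threshold are illustrative choices, not the paper's parameters.

```python
import numpy as np

WINDOW, DEGREE, ERROR_THRESHOLD = 8, 2, 0.5   # assumed parameters

def predict_next(history):
    """Least-squares polynomial fit over the recent window, extrapolated one step."""
    recent = history[-WINDOW:]
    x = np.arange(len(recent))
    coeffs = np.polyfit(x, recent, DEGREE)
    return np.polyval(coeffs, len(recent))

def collect(readings):
    """A node transmits a reading only when the sink's prediction would be
    off by more than the user-defined error threshold."""
    history = list(readings[:WINDOW])
    sent = len(history)
    for value in readings[WINDOW:]:
        predicted = predict_next(history)
        if abs(value - predicted) > ERROR_THRESHOLD:
            history.append(value)       # transmit the real reading
            sent += 1
        else:
            history.append(predicted)   # sink reconstructs from the model
    return sent

temps = 20 + 0.05 * np.arange(200) + np.random.default_rng(0).normal(0, 0.1, 200)
print(f"transmitted {collect(temps)} of {len(temps)} readings")
```

Because slowly varying readings are mostly reconstructed from the model at the sink, only a small fraction of raw values needs to be sent, which is the effect the abstract reports.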